AI-Assisted Development That Actually Works
Ever notice that the more you let it think for you, the worse the results get? This is how you fix that.
Context Engineering
Context engineering is getting the most useful work out of every token you send, within the hard constraints of an LLM's context window. That sentence explains most of what separates developers who get good results from those who don't.
The model gives you what it thinks is most likely correct, based on your input. Vague in, vague out. Precise in, precise out. Your job is shaping the input until the most likely output is what you actually want.
Every time you send a message, the entire conversation history gets sent again. The model reads it fresh. No memory between sessions. No persistent understanding of your project beyond what you explicitly provide. It's like explaining your codebase to a new contractor every single time you need something done.
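A minimal sketch of what that statelessness means at the API level. This is illustrative TypeScript, not any particular SDK; callModel stands in for a real request:

// The "session" is just an array you resend in full on every call.
// Nothing persists server-side between requests.
type Message = { role: 'user' | 'assistant'; content: string }

const history: Message[] = []

// Stand-in for a real API call; the point is that it receives the whole transcript.
async function callModel(messages: Message[]): Promise<string> {
  return `reply based on all ${messages.length} messages`
}

async function send(userInput: string): Promise<string> {
  history.push({ role: 'user', content: userInput })
  const reply = await callModel(history) // every earlier message is paid for again
  history.push({ role: 'assistant', content: reply })
  return reply
}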
Your 200k-token context window: how it fills up during a typical session
Run /context mid-session to see exactly how your context window is being used. A fresh session in a typical monorepo starts at around 20k tokens of baseline overhead. The remaining 180k fills up fast.
Context Poisoning
When your conversation gets long and messy, the model makes worse decisions. Too much noise drowns out the signal. If you're fighting the model, the problem is almost always upstream.
Context poisoning looks like
Long threads with multiple failed attempts. Contradictory instructions from earlier. Abandoned approaches still influencing decisions. Claude second-guessing itself constantly.
Clean context looks like
Fresh session with focused goal. Only relevant files mentioned. Clear requirements stated upfront. Previous decisions documented in files, not conversation history.
The fix is simple but counterintuitive: start fresh more often. A new conversation with clean, focused context will outperform a long thread where you've been iterating for hours.
The /clear workflow
When you've built up understanding with Claude, have it write that understanding to a markdown file before clearing context. Then reference that file in your fresh session. You keep the knowledge without the noise.
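In practice that is one prompt before clearing and one after. The wording and file name here are illustrative:

Before /clear: "Write everything we've established about the payments refactor to docs/payments-notes.md: decisions made, constraints discovered, and approaches we rejected and why."

After /clear: "Read docs/payments-notes.md, then implement the webhook handler we specced."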
Planning Mode and Clarifying Questions
In Claude Code, hit shift + tab twice to enter Plan Mode. This forces the model to think before acting. No file edits, no commands, no changes until you approve.
But the real power is in what happens next. In Plan Mode, Claude uses the AskUserQuestion tool to interview you about ambiguous requirements before proposing anything.
This approach eliminates the endless loop of "that's not what I meant" corrections. Instead of making assumptions and going down the wrong path, Claude pauses and asks you directly what you want. By the time it starts coding, 90% of decisions are already made.
The developers getting the best results with large features work this way: start with a minimal spec, let Claude interview you to fill in the gaps, then execute while the shared understanding is still in context.
The workflow looks like this:
Enter Plan Mode with a minimal prompt
"Add user authentication to the API" is enough. Don't try to specify everything upfront.
Answer the clarifying questions
Claude asks 3-4 structured questions about architecture, tradeoffs, and edge cases. Each question has sensible options based on codebase analysis.
Review and refine the plan
Claude produces a detailed plan with task breakdown, dependencies, and execution order. Edit it directly if needed.
Execute with the plan context intact
Keep the same session. Claude understands the decisions you made together. The shared context is valuable here.
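An illustrative exchange, paraphrased rather than verbatim tool output:

You: Add user authentication to the API.
Claude: How should sessions be persisted?
  1. JWT in httpOnly cookies  2. Server-side sessions in Redis  3. Delegate to an OAuth provider
You: Option 1.
Claude: Should existing endpoints stay accessible without auth during rollout?

After three or four of these, the proposed plan already encodes your answers.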
Chrome DevTools MCP for Visual Review
Claude Code can't see what its generated code actually looks like in the browser. It's coding blind. Chrome DevTools MCP fixes this.
This turns Claude into a virtual front-end debugger. Instead of relying on static code analysis, it examines the actual rendered state. If an element is computed to be 5000px wide due to a flexbox issue, Claude can see that and recommend a fix.
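If you haven't connected it yet, registration is a one-liner, assuming the npx-distributed chrome-devtools-mcp package (check its README for current options):

claude mcp add chrome-devtools -- npx chrome-devtools-mcp@latest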
The visual iteration loop
The real power is having Claude review its own work. After implementing a UI component, tell Claude to screenshot it and verify the layout. It catches problems humans miss after staring at the same code for hours.
# After Claude implements a feature:

"Navigate to localhost:3000/dashboard, take a screenshot,
and verify the component matches what we discussed. Check for:
- Correct spacing between cards
- Hover states working
- Mobile responsiveness at 375px width"

# Claude will screenshot, analyse, and either confirm
# or identify specific issues with line numbers for fixes
Available tools include: navigate pages, take screenshots, click elements, fill forms, check console errors, inspect network requests, and run performance traces. You can test entire user flows without leaving your conversation.
Fresh eyes
Using Separate Agents to Review Work
A simple but effective approach from Anthropic's engineering team: have one Claude write code while another reviews it. As with multiple human engineers, separate context is sometimes an advantage.
Why this works
The first agent accumulates context about implementation decisions, workarounds, and compromises. A fresh agent sees only the output, judging it without that baggage. Problems that seemed acceptable mid-implementation become obvious to fresh eyes.
There are two practical ways to do this:
1. Separate terminal sessions
Run two Claude Code instances. One implements, one reviews. They can even communicate through shared scratchpad files if you want them to collaborate.
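One way to wire that up; the file paths are arbitrary:

Terminal 1: "Implement the feature. After each major step, append a short progress note to .scratch/impl.md."

Terminal 2: "Review the working tree against main. Read .scratch/impl.md for intent, and write your findings to .scratch/review.md."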
2. Subagents for parallel review
Subagents run in their own context windows. Delegate security audits, test generation, or code review to specialised subagents without polluting your main conversation.
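A subagent is just a markdown file with YAML frontmatter, placed in .claude/agents/ for the project or ~/.claude/agents/ for personal use. A security reviewer might look like this: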
---
name: security-reviewer
description: Reviews code for security vulnerabilities
tools: Read, Grep, Bash
model: sonnet
---

You are a security-focused code reviewer.

# Focus areas
- Input validation and sanitisation
- Authentication and authorisation flaws
- SQL injection, XSS, CSRF
- Secrets in code or logs
- Dependency vulnerabilities

# Output format
For each issue:
- Severity: Critical / High / Medium / Low
- Location: file:line
- Issue: clear description
- Fix: specific recommendation
The main benefit is context isolation. Each subagent gets its own context window, preventing any single session from getting poisoned with too much implementation detail.
Persistent context
CLAUDE.md - Your Project Memory
A CLAUDE.md file in your project root gets loaded automatically every session. This is where you put everything Claude needs to know about your project: tech stack, conventions, architecture decisions, things to avoid.
# Project: E-commerce Dashboard
# Stack: Vue 3 + TypeScript + Pinia + Vitest

# Conventions
- Components: PascalCase, single file
- Composables: use* prefix, in src/composables/
- No any types. Use unknown and narrow.
- Tests live next to source files

# Architecture decisions
- Auth tokens stored in httpOnly cookies, not localStorage
- All API errors go through centralised handler
- Feature flags via src/config/features.ts

# Don't
- Don't commit until I approve
- Don't use Options API
- Don't add npm dependencies without asking
Files merge hierarchically. Enterprise settings at the top, user preferences in ~/.claude/CLAUDE.md, then project root, then directory-specific files. You can put a CLAUDE.md in src/components/ with component-specific patterns.
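Concretely, for the example project above, the merge might stack like this:

~/.claude/CLAUDE.md              # personal preferences, all projects
./CLAUDE.md                      # the project file shown above
./src/components/CLAUDE.md      # component-specific patterns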
Keep it concise. Every line gets loaded every time. Treat it as a reference guide, not documentation. Point to docs rather than duplicating them.
Better than MCPs
Skills: Automatic Context Loading
Skills are folders with a SKILL.md file describing when the skill should activate. Unlike slash commands which you trigger manually, skills activate automatically when Claude recognises a matching context.
Skills are proving more reliable than MCPs for most use cases. Better context efficiency, more consistent activation at the right moments, and they don't sit there consuming tokens when you're not using them.
The key advantage: MCPs consume context constantly once connected. Skills load only when relevant, then unload. The model also follows skill instructions more reliably because there's less competing noise in the context window.
---
name: vue-testing
description: Vue 3 component testing with Vitest
tools: Read, Write, Bash
---

When writing tests for Vue components:

# Setup
- Use @testing-library/vue for rendering
- Mock Pinia stores with createTestingPinia()
- Mock API calls, never hit real endpoints

# Structure
- Describe block per component
- Test user behaviour, not implementation
- One assertion per test where practical

# Naming
- ComponentName.test.ts in same directory
- Describe: "ComponentName"
- Test: "should [expected behaviour] when [condition]"
Put skills in ~/.claude/skills/ for personal use across all projects, or .claude/skills/ for project-specific ones. Claude will automatically load the vue-testing skill when you ask it to write tests for a Vue component.
Skills
High context efficiency. Activate automatically based on task. Better instruction following. Load on demand.
MCPs
Lower context efficiency. Always consume tokens when connected. Best for live external data like databases or APIs.
Slash Commands
Manual trigger only. Good for explicit workflows you want to invoke on demand. Code reviews, commit messages.
Hooks
Automatic lifecycle reactions. Run linting after edits, require approval for bash commands. No manual invocation.
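Hooks live in .claude/settings.json. A sketch of the lint-after-edit idea; treat the event names and schema below as approximate and verify against the hooks documentation before copying:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npm run lint" }]
      }
    ]
  }
}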
Commands That Change Everything
Claude Code has features that most people never discover. These aren't hidden, but they're easy to miss if you're just using it as a chat interface. Each one solves a specific problem that would otherwise slow you down.
/rewind - Undo mistakes instantly
Made a wrong turn? Claude edited files you didn't want touched? The /rewind command lets you roll back to any previous point in your conversation. It reverts both the conversation state and any file changes.
This is particularly useful when you're experimenting. Try something ambitious, and if it doesn't work, rewind and try a different approach. No manual git stashing required.
/clear over /compact - Fresh context wins
When your context window fills up, you have two options: /compact to summarise and compress, or /clear to start fresh. From experience, /clear is almost always the better choice.
Compact tries to preserve important information while discarding the rest. In theory this sounds ideal. In practice it often keeps the wrong things, and Claude starts doing things you don't expect. The summarisation introduces subtle misunderstandings that compound over time.
The /compact trap
Keeps a compressed version of context that may not accurately represent what you actually need. Claude's behaviour becomes unpredictable because it's working from a lossy summary.
The /clear approach
Start fresh with exactly the context you need. Reference your CLAUDE.md, point to specific files, state your current goal. Takes 30 seconds and gives you reliable behaviour.
Don't be precious about your conversation history. If you've built up valuable understanding, have Claude write it to a markdown file before clearing. Then reference that file in your fresh session. You keep the knowledge without the accumulated noise.
Essential commands reference
/context
Check your context window usage. Run this regularly. If you're above 60%, consider clearing.
/clear
Nuke the conversation. Start fresh. Almost always better than /compact for maintaining output quality.
/rewind
Roll back to a previous checkpoint. Reverts both conversation and file changes.
/model
Check or switch models. Opus 4.5 handles both planning and implementation well. Only switch down for rate limits or cost.
/permissions
View and modify what Claude is allowed to do. Tighten for sensitive work.
/install-skill
Add skills from the community. Skills beat MCPs for most tasks - better context efficiency, more reliable timing.
Practical Use Cases
Some tasks are perfect for AI assistance. Others waste time. Knowing the difference matters more than any technique.
Analysing unfamiliar codebases
Point Claude Code at a complex codebase and ask questions. How does the auth flow work? Where are database connections handled? What happens when this endpoint gets called?
You can dump an entire SQL database schema to files and have Claude explain how everything connects. Understanding someone else's spaghetti becomes dramatically faster.
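For example, with Postgres (every major database has an equivalent tool):

pg_dump --schema-only --no-owner mydb > schema.sql

Point Claude at schema.sql and ask how the tables relate.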
TDD workflow
Have Claude write tests first, then implement code to pass them. The tests become validation checkpoints. Instead of you manually checking if implementation works, Claude runs the tests itself to verify correctness.
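Using the Vitest stack from the CLAUDE.md example earlier, the first artefact is a failing test. The cart module here is hypothetical:

// cart.test.ts - written before cart.ts exists, so it fails first
import { describe, it, expect } from 'vitest'
import { applyDiscount } from './cart'

describe('applyDiscount', () => {
  it('should reduce the price proportionally when the rate is valid', () => {
    expect(applyDiscount(100, 0.2)).toBe(80)
  })

  it('should never return a negative price when the rate exceeds 1', () => {
    expect(applyDiscount(50, 1.5)).toBe(0)
  })
})

Claude then implements applyDiscount and reruns npx vitest run until both pass.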
SDLC automation
GitHub Actions combined with Claude Code handles the tedious parts: auto-generate tests on PR creation, update docs on PR approval, analyse bug reports in Issues. Keep human review in the loop for anything substantial.
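A trimmed sketch of the generate-tests-on-PR idea, assuming Anthropic's claude-code-action and an ANTHROPIC_API_KEY repository secret. Input names have changed between releases, so check the action's README before using this:

name: generate-tests
on:
  pull_request:
    types: [opened]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Write unit tests covering the files changed in this PR"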
Common mistakes
What Not to Do
Pasting entire codebases
Context limits exist. Context poisoning is real. Be surgical about what you include. Reference files by path rather than pasting contents.
Skipping planning
Rewrites take longer than planning. The time you save is time you'll spend debugging. Use Plan Mode for anything non-trivial.
Assuming memory exists
Every conversation starts fresh. Claude doesn't remember yesterday's session. Set up context explicitly every time.
Using too many tools
Imagine using PowerPoint, Google Slides, Canva, and Keynote for slides. Pick Claude Code plus one backup and learn them properly.
Finding Your Workflow
Everything in this guide is a starting point. The developers getting the best results aren't following a prescribed method. They've experimented, failed, adjusted, and built workflows that fit how they actually work.
Some people swear by Plan Mode for everything. Others find it slows them down for quick fixes and only use it for multi-file changes. Some teams have elaborate CLAUDE.md files with dozens of conventions. Others keep it minimal and let skills handle the specifics.
There's no correct answer. There's only what works for you.
Start with the basics
Get comfortable with Plan Mode and the /context command. Understand how your context window fills up. These fundamentals apply regardless of what workflow you build.
Notice your friction points
Where do you waste time? Repeating the same instructions? Fixing the same types of mistakes? Context getting poisoned mid-session? Each friction point is an opportunity for a skill, hook, or workflow change.
Build incrementally
Don't try to set up the perfect system on day one. Add one skill when you notice a pattern. Create a slash command when you've typed the same prompt five times. Let your setup evolve with your needs.
Share what works
If you're on a team, document workflows that prove effective. A shared CLAUDE.md with team conventions. Skills for your tech stack. Slash commands for common review patterns. Consistency multiplies the benefits.
Example workflows people actually use
The spec-driven approach
Plan Mode interview to build spec. Clear context. New session references spec file. Implementation happens with all decisions already made. Works well for features with multiple stakeholders.
The TDD loop
Write tests first. Implement to pass. Review with fresh agent. Commit. Particularly effective for business logic where correctness matters more than speed.
The visual verification loop
Implement UI component. Chrome DevTools screenshot. Claude reviews its own work. Fix issues. Screenshot again. Best for frontend work where "it compiles" doesn't mean "it looks right".
The parallel review
One agent implements. Second agent reviews with fresh context. Third agent writes tests. Sounds heavyweight but moves faster than sequential work with context poisoning.
The point isn't to copy these exactly. It's to recognise that workflows exist on a spectrum from fully manual to heavily automated, and you get to choose where you sit based on your project, your team, and your preferences.
The meta-skill
Learning to use AI tools effectively is itself a skill that improves with practice. The developers who started six months ago and iterated on their workflows are dramatically more productive than developers just starting today with the same tools. Put in the reps. Pay attention to what works. Adjust.
The Bottom Line
Context engineering is the discipline. Plan Mode resolves ambiguity upfront. Chrome DevTools MCP lets Claude see its own work. Fresh agents provide unbiased review after you're done building.
The tools are good. Use them properly: planning first, visual verification, clear context for review. The results follow.
Shape context badly and you'll spend more time fixing AI output than you would have spent writing the code yourself.
The model is a tool. You're the engineer.

